Can NIC Memory in InfiniBand Benefit Communication Performance? — A Study with Mellanox Adapter

Authors

  • Jiesheng Wu
  • Amith R. Mamidala
  • Dhabaleswar K. Panda
Abstract

This paper presents a comprehensive micro-benchmark evaluation of using NIC memory in the Mellanox InfiniBand adapter. Three main benefits are explored: non-blocking, high-performance host/NIC data movement; reduced traffic on the local interconnect; and avoidance of the local interconnect bottleneck. Two case studies show how applications can exploit these benefits. In the first, the NIC memory is used as an intermediate communication buffer for non-contiguous data communication, yielding lower CPU overhead and better latency. In the second, a common communication building block, the communication forwarding chain, is studied. Our results show that using the NIC memory achieves up to a factor of 2.2 improvement over the conventional approach. To the best of our knowledge, this is the first study to demonstrate the benefits of NIC memory in an InfiniBand adapter.
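The paper targets the 2004-era Mellanox VAPI interface, so the following is only a rough modern analogue: a minimal sketch that uses the libibverbs device-memory extension (ibv_alloc_dm, available on ConnectX-5 and newer HCAs) to stage a host buffer in NIC-resident memory, the same idea as the paper's intermediate-buffer case study. The buffer sizes and access flags chosen here are illustrative assumptions, not the authors' code.

```c
/* Minimal sketch (not the paper's VAPI code path): stage a host buffer
 * in NIC-resident device memory with the modern libibverbs API.
 * Build: gcc nic_mem_sketch.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate 4 KB of memory that lives on the adapter itself. */
    struct ibv_alloc_dm_attr dm_attr = {
        .length = 4096,
        .log_align_req = 0,   /* no special alignment requirement */
        .comp_mask = 0,
    };
    struct ibv_dm *dm = ibv_alloc_dm(ctx, &dm_attr);
    if (!dm) {
        fprintf(stderr, "device memory not supported on this HCA\n");
        return 1;
    }

    /* Register the device memory so it can be the source/target of
     * send/RDMA operations.  Device-memory MRs are zero-based. */
    struct ibv_mr *mr = ibv_reg_dm_mr(pd, dm, 0, dm_attr.length,
                                      IBV_ACCESS_ZERO_BASED |
                                      IBV_ACCESS_LOCAL_WRITE |
                                      IBV_ACCESS_REMOTE_READ);

    /* Stage a host buffer in NIC memory, e.g. after packing
     * non-contiguous data, so later transfers start from the NIC. */
    char host_buf[256];
    memset(host_buf, 'x', sizeof(host_buf));
    ibv_memcpy_to_dm(dm, 0, host_buf, sizeof(host_buf));

    /* ... post send/RDMA work requests that reference mr->lkey ... */

    ibv_dereg_mr(mr);
    ibv_free_dm(dm);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```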


Similar articles

Can Memory-Less Network Adapters Benefit Next-Generation InfiniBand

InfiniBand is emerging as a high-performance interconnect. It is gaining popularity because of its high performance and open standard. Recently, PCI-Express, the third-generation high-performance I/O bus used to interconnect peripheral devices, has been released. The third generation of InfiniBand adapters allows applications to take advantage of PCI-Express. PCI-Express offers very low...


Performance of Mellanox ConnectX Adapter on Multi-core Architectures Using InfiniBand



Multi-connection and Multi-core Aware All-gather on Infiniband Clusters

MPI_Allgather is a collective communication operation that is intensively used in many scientific applications. Due to high data exchange volume in MPI_Allgather, efficient and scalable implementation of this operation is critical to the performance of scientific applications running on emerging multi-core clusters. Mellanox ConnectX is a modern InfiniBand host channel adapter that is able to s...
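For readers unfamiliar with the operation this entry discusses, a minimal, self-contained MPI_Allgather program is sketched below; it is a generic illustration of the call, not code from the cited work.

```c
/* Minimal MPI_Allgather example: each rank contributes its rank id,
 * and every rank receives the full array of contributions.
 * Build/run: mpicc allgather.c && mpirun -np 4 ./a.out */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank;                       /* one int per rank */
    int *recvbuf = malloc(size * sizeof(int));

    /* Every rank ends up with recvbuf[i] == i for all i. */
    MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT,
                  MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);

    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```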


Analysis of the Memory Registration Process in the Mellanox InfiniBand Software Stack

To leverage high-speed interconnects like InfiniBand, it is important to minimize communication overhead. The most significant of these overheads is the registration of communication memory. In this paper, we present our analysis of the memory registration process inside the Mellanox InfiniBand driver and possible ways out of this bottleneck. We evaluate and characterize the most time-consuming parts ...
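To make the registration cost concrete, here is a small illustrative sketch (not taken from the cited paper) that simply times ibv_reg_mr on a 64 MB host buffer; the buffer size, access flags, and timing method are arbitrary choices.

```c
/* Illustration only: measure the cost of registering a host buffer
 * with ibv_reg_mr, the step identified above as a major overhead.
 * Build: gcc reg_cost.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    size_t len = 64ul << 20;                  /* 64 MB buffer */
    void *buf = malloc(len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("ibv_reg_mr(%zu bytes) took %.1f us\n", len, us);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```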


An MPICH2 Channel Device Implementation over VAPI on InfiniBand

MPICH2, the successor of one of the most popular open source message passing implementations, aims to fully support the MPI-2 standard. Due to a complete redesign, MPICH2 is also cleaner, more flexible, and faster. The InfiniBand network technology is an open industry standard and provides high bandwidth and low latency, as well as reliability, availability, serviceability (RAS) features. It is...



Journal:

Volume   Issue

Pages   -

Publication date: 2004